Interior point method
Interior point methods (also referred to as barrier methods) are a class of algorithms that solve linear and nonlinear convex optimization problems.
John von Neumann suggested an interior point method for linear programming, which was neither a polynomial-time method nor efficient in practice; in fact, it turned out to be slower than the simplex method, which itself is not a polynomial-time method. In 1984, Narendra Karmarkar developed a method for linear programming called Karmarkar's algorithm, which runs in provably polynomial time and is also very efficient in practice. It enabled solutions of linear programming problems that were beyond the capabilities of the simplex method. In contrast to the simplex method, it reaches an optimal solution by traversing the interior of the feasible region. The method can be generalized to convex programming based on a self-concordant barrier function used to encode the convex set.
Any convex optimization problem can be transformed into minimizing (or maximizing) a linear function over a convex set by converting to the epigraph form. The idea of encoding the feasible set using a barrier and designing barrier methods was studied by Anthony V. Fiacco, Garth P. McCormick, and others in the early 1960s. These ideas were mainly developed for general nonlinear programming, but they were later abandoned due to the presence of more competitive methods for this class of problems (e.g. sequential quadratic programming).
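As a brief illustration of the epigraph form just mentioned (a standard identity, written out here for concreteness): minimizing a convex function f is equivalent to minimizing the linear objective t over the epigraph of f,
:\min_x f(x) \quad \Longleftrightarrow \quad \min_{(x,t)}\ t ~~\text{subject to}~~ f(x) - t \le 0,
so a general convex objective can always be traded for a linear one at the cost of one extra variable and one extra convex constraint.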
Yurii Nesterov and Arkadi Nemirovski came up with a special class of such barriers that can be used to encode any convex set; these barriers guarantee that the number of iterations of the algorithm is bounded by a polynomial in the dimension of the problem and the accuracy of the solution.
Karmarkar's breakthrough revitalized the study of interior point methods and barrier problems, showing that it was possible to create an algorithm for linear programming with polynomial complexity that was, moreover, competitive with the simplex method.
Khachiyan's earlier ellipsoid method was itself a polynomial-time algorithm; however, it was too slow to be of practical interest.
The class of primal-dual path-following interior point methods is considered the most successful.
Mehrotra's predictor-corrector algorithm provides the basis for most implementations of this class of methods.
==Primal-dual interior point method for nonlinear optimization==
The primal-dual method's idea is easy to demonstrate for constrained nonlinear optimization.
For simplicity, consider the all-inequality version of a nonlinear optimization problem:
:\text{minimize } f(x) ~~ \text{subject to } c_i(x) \ge 0 ~~\text{for}~ i = 1, \ldots, m, ~~ x \in \mathbb{R}^n, \text{ where } f : \mathbb{R}^n \rightarrow \mathbb{R},~ c_i : \mathbb{R}^n \rightarrow \mathbb{R}~~~~~~(1).
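To make (1) concrete, here is a minimal Python sketch of one instance; the specific objective and constraints are hypothetical choices, not taken from the text:

 import numpy as np

 # A hypothetical instance of problem (1) with n = 2, m = 2:
 #   minimize f(x) = (x0 - 1)^2 + (x1 - 2)^2
 #   subject to c0(x) = x0 >= 0 and c1(x) = x1 - 1 >= 0.
 def f(x):
     return (x[0] - 1.0) ** 2 + (x[1] - 2.0) ** 2

 def c(x):
     # Vector of constraint values c_i(x); x is feasible iff all entries are >= 0.
     return np.array([x[0], x[1] - 1.0])

 x0 = np.array([0.5, 1.5])   # a strictly interior starting point
 assert np.all(c(x0) > 0)    # interior point methods start from c(x) > 0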
The logarithmic barrier function associated with (1) is
:B(x,\mu) = f(x) - \mu \sum_{i=1}^m \ln(c_i(x))~~~~~(2)
Here \mu is a small positive scalar, sometimes called the "barrier parameter". As \mu converges to zero, the minimum of B(x,\mu) should converge to a solution of (1).
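A minimal numerical sketch of this limit, using a hypothetical one-dimensional instance (minimize x subject to x - 1 \ge 0, whose barrier (2) is B(x,\mu) = x - \mu \ln(x-1), with exact minimizer x = 1 + \mu):

 import numpy as np

 # Hypothetical 1-D instance: minimize f(x) = x subject to c(x) = x - 1 >= 0.
 # Its log barrier (2) is B(x, mu) = x - mu * ln(x - 1), minimized at x = 1 + mu.
 def B(x, mu):
     return x - mu * np.log(x - 1.0)

 xs = np.linspace(1.0 + 1e-6, 3.0, 200001)  # fine grid inside the feasible set
 for mu in [1.0, 0.1, 0.01, 0.001]:
     x_star = xs[np.argmin(B(xs, mu))]
     print(f"mu = {mu:6.3f}   barrier minimizer ~ {x_star:.4f}")
 # The printed minimizers approach the true solution x = 1 as mu -> 0.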
The barrier function gradient is
:g_b = g - \mu \sum_{i=1}^m \frac{1}{c_i(x)} \nabla c_i(x)~~~~~~(3)
where g is the gradient of the original function f(x) and \nabla c_i is the gradient of c_i.
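A short sketch of (3), checked against a central finite difference; the quadratic objective and linear constraints are hypothetical, chosen so that every piece of (3) is easy to write down:

 import numpy as np

 # Hypothetical instance: f(x) = x0^2 + x1^2, constraints x0 - 1 >= 0, x1 - 1 >= 0.
 def grad_f(x):
     return 2.0 * x            # g, the gradient of f

 def c(x):
     return x - 1.0            # constraint values c_i(x)

 def jac_c(x):
     return np.eye(2)          # rows are the constraint gradients (constant here)

 def barrier_grad(x, mu):
     # Equation (3): g_b = g - mu * sum_i (1 / c_i(x)) * grad c_i(x)
     return grad_f(x) - mu * jac_c(x).T @ (1.0 / c(x))

 x, mu, h = np.array([2.0, 3.0]), 0.1, 1e-6
 B = lambda y: y @ y - mu * np.sum(np.log(y - 1.0))
 fd = np.array([(B(x + h * e) - B(x - h * e)) / (2 * h) for e in np.eye(2)])
 print(barrier_grad(x, mu), fd)  # the two gradients should agree closely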
In addition to the original ("primal") variable x, we introduce a Lagrange-multiplier-inspired dual variable \lambda \in \mathbb{R}^m defined by
:c_i(x) \lambda_i=\mu, \forall i=1,\ldots,m~~~~~~~(4)
(4) is sometimes called the "perturbed complementarity" condition, for its resemblance to "complementary slackness" in the KKT conditions.
We try to find those (x_\mu, \lambda_\mu) for which the gradient of the barrier function is zero.
Applying (4) to (3), we get an equation for the gradient:
:g - A^T \lambda = 0~~~~~~(5)
where the matrix A is the Jacobian of the constraints c(x).
The intuition behind (5) is that the gradient of f(x) should lie in the subspace spanned by the constraints' gradients. The "perturbed complementarity" condition (4), with small \mu, can be understood as requiring that the solution either lie near the boundary c_i(x) = 0, or that the projection of the gradient g onto the normal of the constraint component c_i(x) be almost zero.
Applying Newton's method to (4) and (5), we get an equation for the (x, \lambda) update (p_x, p_\lambda):
:\begin{pmatrix} W & -A^T \\ \Lambda A & C \end{pmatrix} \begin{pmatrix} p_x \\ p_\lambda \end{pmatrix} = \begin{pmatrix} -g + A^T \lambda \\ \mu 1 - C \lambda \end{pmatrix}
where W is the Hessian matrix of B(x, \mu), \Lambda is a diagonal matrix with the entries of \lambda on its diagonal, and C is a diagonal matrix with C_{ii} = c_i(x).
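A minimal numpy sketch of assembling and solving this Newton system for one step; the quadratic objective and linear constraints are hypothetical, and W is built as the Hessian of B(x, \mu), as defined above:

 import numpy as np

 # Hypothetical data: f(x) = 0.5 * x.x (so g = x), constraints c(x) = x - 1 >= 0
 # (so the Jacobian A is the identity), with n = m = 2.
 x = np.array([2.0, 3.0])      # strictly interior primal point
 lam = np.array([0.5, 0.5])    # positive dual estimate
 mu = 0.1

 g = x                         # gradient of f
 A = np.eye(2)                 # constraint Jacobian
 cx = x - 1.0                  # constraint values c(x)
 C = np.diag(cx)
 Lam = np.diag(lam)
 # Hessian of B(x, mu): Hessian of f plus the barrier curvature term
 # (the constraints are linear here, so they add no second-order term of their own).
 W = np.eye(2) + A.T @ np.diag(mu / cx**2) @ A

 # Assemble the block system from the text and solve for (p_x, p_lambda).
 K = np.block([[W, -A.T],
               [Lam @ A, C]])
 rhs = np.concatenate([-g + A.T @ lam, mu * np.ones(2) - C @ lam])
 p = np.linalg.solve(K, rhs)
 p_x, p_lam = p[:2], p[2:]
 print("p_x =", p_x, "p_lambda =", p_lam)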
Because of (1) and (4), the condition
:\lambda \ge 0
should be enforced at each step. This can be done by choosing an appropriate step size \alpha:
:(x,\lambda) \rightarrow (x+ \alpha p_x, \lambda + \alpha p_\lambda).
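One common recipe for \alpha is a "fraction to boundary" rule, sketched below; the 0.995 safety factor is a conventional choice, not something specified in the text. Applied to both \lambda and the constraint values c(x), it keeps the iterates strictly interior:

 import numpy as np

 def max_step(v, dv, tau=0.995):
     # Largest alpha <= 1 such that v + alpha * dv stays strictly positive,
     # backed off by the safety factor tau; only decreasing components matter.
     mask = dv < 0
     if not np.any(mask):
         return 1.0
     return min(1.0, tau * np.min(-v[mask] / dv[mask]))

 lam = np.array([0.5, 0.5])
 p_lam = np.array([0.2, -0.6])   # hypothetical Newton step for lambda
 cx = np.array([1.0, 2.0])       # current constraint values c(x)
 p_c = np.array([-1.5, 0.3])     # hypothetical change of c(x) along p_x

 alpha = min(max_step(lam, p_lam), max_step(cx, p_c))
 lam = lam + alpha * p_lam       # remains strictly positive
 print("alpha =", alpha, "new lambda =", lam)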

Excerpt source: the free encyclopedia Wikipedia. The full text of "Interior point method" can be read on Wikipedia.